Space-Time Tradeoffs for Distributed Verification
Verifying that a network configuration satisfies a given boolean predicate is
a fundamental problem in distributed computing. Many variations of this problem
have been studied, for example, in the context of proof labeling schemes (PLS),
locally checkable proofs (LCP), and non-deterministic local decision (NLD). In
all of these contexts, verification time is assumed to be constant. Korman,
Kutten and Masuzawa [PODC 2011] presented a proof-labeling scheme for MST, with
poly-logarithmic verification time, and logarithmic memory at each vertex.
In this paper we introduce the notion of a t-PLS, which allows the
verification procedure to run for super-constant time t. Our work analyzes the
tradeoffs of t-PLS between time, label size, message length, and computation
space. We construct a universal t-PLS and prove that it uses the same amount
of total communication as a known one-round universal PLS, with labels smaller
by a factor of t. In addition, we provide a general technique to prove lower
bounds for space-time tradeoffs of t-PLS. We use this technique to show an
optimal tradeoff for testing that a network is acyclic (cycle-free), with
matching bounds on label size and computation space. We further describe a
recursive small-space verifier for acyclicity which does not assume previous
knowledge of the run-time t.
Comment: Pre-proceedings version of paper presented at the 24th International
Colloquium on Structural Information and Communication Complexity (SIROCCO
2017).
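To make the verification setting concrete, the following is a minimal sketch of a classic one-round proof-labeling scheme for acyclicity, not the space-time-optimized construction of the paper, and the function names are illustrative. The honest prover labels each node with its BFS distance to a root of its component; each node accepts iff every neighbor's label differs from its own by exactly one and, when its own label is positive, exactly one neighbor carries the smaller label (its unique "parent"). Since every accepted edge is then a parent edge, an accepted graph must be a forest.

```python
from collections import deque
from itertools import product

def prover_labels(adj):
    """Honest prover on a forest: BFS distance to a root in each component."""
    lab = {}
    for s in adj:
        if s in lab:
            continue
        lab[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in lab:
                    lab[u] = lab[v] + 1
                    queue.append(u)
    return lab

def node_accepts(v, adj, lab):
    """Local one-round check run at vertex v."""
    mine = lab[v]
    around = [lab[u] for u in adj[v]]
    if any(abs(x - mine) != 1 for x in around):
        return False                  # every edge must change the label by 1
    if mine > 0 and around.count(mine - 1) != 1:
        return False                  # exactly one 'parent' per non-root
    return True

def network_accepts(adj, lab):
    return all(node_accepts(v, adj, lab) for v in adj)

# A path (acyclic): the honest labels are accepted everywhere.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(network_accepts(path, prover_labels(path)))   # True

# A 4-cycle: no labeling (even a cheating prover's) is accepted.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
fooled = any(network_accepts(c4, dict(enumerate(ls)))
             for ls in product(range(4), repeat=4))
print(fooled)                                       # False
```

The brute-force check over the cycle's labelings illustrates soundness: the maximum-labeled node on a cycle necessarily sees two smaller-labeled neighbors and rejects.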
Node Labels in Local Decision
The role of unique node identifiers in network computing is well understood
as far as symmetry breaking is concerned. However, the unique identifiers also
leak information about the computing environment - in particular, they provide
some nodes with information related to the size of the network. It was recently
proved that in the context of local decision, there are some decision problems
such that (1) they cannot be solved without unique identifiers, and (2) unique
node identifiers leak a sufficient amount of information such that the problem
becomes solvable (PODC 2013).
In this work we study the minimal amount of information that we
need to leak from the environment to the nodes in order to solve local decision
problems. Our key results are related to scalar oracles that, for any given
network size n, provide a multiset of n labels; then the adversary assigns the
labels to the nodes in the network. This is a direct generalisation of the
usual assumption of unique node identifiers. We give a complete
characterisation of the weakest oracle that leaks at least as much information
as the unique identifiers.
Our main result is the following dichotomy: we classify scalar oracles as
large and small, depending on their asymptotic behaviour, and show that (1) any
large oracle is at least as powerful as the unique identifiers in the context
of local decision problems, while (2) for any small oracle there are local
decision problems that still benefit from unique identifiers.
Comment: Conference version to appear in the proceedings of SIROCCO 201
Navigability is a Robust Property
The Small World phenomenon has inspired researchers across a number of
fields. A breakthrough in its understanding was made by Kleinberg who
introduced Rank Based Augmentation (RBA): add to each vertex independently an
arc to a random destination selected from a carefully crafted probability
distribution. Kleinberg proved that RBA makes many networks navigable, i.e., it
allows greedy routing to successfully deliver messages between any two vertices
in a polylogarithmic number of steps. We prove that navigability is an inherent
property of many random networks, arising without coordination, or even
independence assumptions.
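Kleinberg's Rank Based Augmentation and greedy routing are easy to simulate; here is a toy one-dimensional version on a ring rather than the usual grid, with function names of our own choosing. Each node receives one long-range arc drawn with probability proportional to the inverse of the ring distance, and greedy routing always forwards to the known neighbor closest to the target.

```python
import random

def ring_dist(n, a, b):
    """Distance between a and b on an n-node ring."""
    d = abs(a - b)
    return min(d, n - d)

def rba_contacts(n, rng):
    """Rank Based Augmentation on a ring: each node gets one extra arc,
    drawn with probability proportional to 1/distance (harmonic)."""
    contacts = {}
    for u in range(n):
        others = [v for v in range(n) if v != u]
        weights = [1.0 / ring_dist(n, u, v) for v in others]
        contacts[u] = rng.choices(others, weights=weights)[0]
    return contacts

def greedy_route(n, contacts, source, target):
    """Forward to the known neighbor (ring or long-range) closest to target."""
    steps, cur = 0, source
    while cur != target:
        options = [(cur - 1) % n, (cur + 1) % n, contacts[cur]]
        cur = min(options, key=lambda v: ring_dist(n, v, target))
        steps += 1
    return steps

rng = random.Random(42)
n = 512
contacts = rba_contacts(n, rng)
print(greedy_route(n, contacts, 0, n // 2))
```

Greedy routing always terminates here because one of the two ring neighbors strictly decreases the distance to the target; the long-range arcs are what bring the step count down from linear toward polylogarithmic.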
On the Complexity of Local Distributed Graph Problems
This paper is centered on the complexity of graph problems in the
well-studied LOCAL model of distributed computing, introduced by Linial [FOCS
'87]. It is widely known that for many of the classic distributed graph
problems (including maximal independent set (MIS) and (Δ+1)-vertex
coloring), the randomized complexity is at most polylogarithmic in the size
of the network, while the best deterministic complexity is typically
2^O(√log n). Understanding and narrowing down this exponential gap
is considered to be one of the central long-standing open questions in the area
of distributed graph algorithms. We investigate the problem by introducing a
complexity-theoretic framework that allows us to shed some light on the role of
randomness in the LOCAL model. We define the SLOCAL model as a sequential
version of the LOCAL model. Our framework allows us to prove completeness
results with respect to the class of problems which can be solved efficiently
in the SLOCAL model, implying that if any of the complete problems can be
solved deterministically in polylogarithmically many rounds in the LOCAL model,
we can deterministically solve all efficient SLOCAL-problems (including MIS and
(Δ+1)-coloring) in polylogarithmically many rounds in the LOCAL model. We show
that a rather rudimentary looking graph coloring problem is complete in the
above sense: Color the nodes of a graph with colors red and blue such that each
node of sufficiently large polylogarithmic degree has at least one neighbor of
each color. The problem admits a trivial zero-round randomized solution. The
result can be viewed as showing that the only obstacle to getting efficient
deterministic algorithms in the LOCAL model is an efficient algorithm to
approximately round fractional values into integer values.
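The trivial zero-round randomized solution mentioned above is simply a fair coin flip at every node: a node of degree d then misses one of the two colors in its neighborhood with probability 2^(1-d), which is polynomially small once the degree is polylogarithmic. A small sketch with illustrative function names:

```python
import random

def random_two_coloring(nodes, rng):
    """The zero-round algorithm: each node flips a fair coin independently."""
    return {v: rng.choice(("red", "blue")) for v in nodes}

def violations(adj, colors, threshold):
    """Nodes of degree >= threshold missing some color among their neighbors."""
    return [v for v, nbrs in adj.items()
            if len(nbrs) >= threshold
            and {colors[u] for u in nbrs} != {"red", "blue"}]

# Star with a degree-30 center: an all-red coloring violates the property,
# while any coloring with both colors among the leaves satisfies it.
star = {0: list(range(1, 31)), **{v: [0] for v in range(1, 31)}}
all_red = {v: "red" for v in star}
mixed = {v: ("red" if v % 2 else "blue") for v in star}
print(violations(star, all_red, threshold=20))   # [0]
print(violations(star, mixed, threshold=20))     # []
```

The hard part, as the completeness result shows, is achieving the same guarantee deterministically; the checker above only verifies a given coloring.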
A general lower bound for collaborative tree exploration
We consider collaborative graph exploration with a set of k agents. All
agents start at a common vertex of an initially unknown graph and need to
collectively visit all other vertices. We assume agents are deterministic,
vertices are distinguishable, moves are simultaneous, and we allow agents to
communicate globally. For this setting, we give the first non-trivial lower
bounds that bridge the gap between small and large teams of agents.
Remarkably, our bounds tightly connect to existing results in both domains.
First, we significantly extend a lower bound by Dynia et al. on the
competitive ratio of collaborative tree exploration strategies to a wider
range of team sizes. Second,
we provide a tight lower bound on the number of agents needed for any
competitive exploration algorithm. In particular, we show a lower bound on the
competitive ratio of any collaborative tree exploration algorithm whose team
is too small, complementing the algorithm of Dereniowski et al., which is
competitive when given a sufficiently large team of agents (with the
team-size thresholds stated in terms of n and the diameter D of the graph).
Lastly, we show a lower bound on the number of rounds needed by any
exploration algorithm with a bounded number of agents on trees of arbitrarily
large height, and we provide a simple algorithm that matches this bound for
all trees.
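The baseline bounds in this setting are easy to state: with k agents, at most k new vertices can be visited per round, and some vertex lies at the tree's full depth, so any strategy needs at least max(depth, ceil((n-1)/k)) rounds; a lone agent running DFS, by contrast, crosses every tree edge twice. A small illustrative computation (function names are ours, not the paper's):

```python
def exploration_lower_bound(n, k, depth):
    """Any k-agent exploration needs >= max(depth, ceil((n-1)/k)) rounds."""
    return max(depth, -(-(n - 1) // k))

def dfs_rounds(children, root):
    """Moves of a single-agent DFS that returns to the root:
    every tree edge is crossed exactly twice."""
    moves = 0
    def visit(v):
        nonlocal moves
        for c in children.get(v, []):
            moves += 2  # walk down into the subtree and back up
            visit(c)
    visit(root)
    return moves

# A path on 10 vertices: the depth term dominates the bound.
path = {i: [i + 1] for i in range(9)}
print(exploration_lower_bound(10, 2, 9), dfs_rounds(path, 0))   # 9 18

# A star with 9 leaves: a lone agent pays a factor-2 overhead.
star = {0: list(range(1, 10))}
print(exploration_lower_bound(10, 1, 1), dfs_rounds(star, 0))   # 9 18
```

The interesting regime studied in the paper sits between these trivial bounds: how much of the DFS overhead a team of intermediate size can avoid on an unknown tree.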
A simple and optimal ancestry labeling scheme for trees
We present a simple, asymptotically optimal ancestry labeling scheme for
trees. The problem was first presented by Kannan et al. [STOC 88'] along with
a simple solution. Motivated by applications to XML files, the label size was
improved incrementally over the course of more than 20 years by a series of
papers. The last, due to Fraigniaud and Korman [STOC 10'], presented an
asymptotically optimal labeling scheme using
non-trivial tree-decomposition techniques. By providing a framework
generalizing interval based labeling schemes, we obtain a simple, yet
asymptotically optimal solution to the problem. Furthermore, our labeling
scheme is attained by a small modification of the original solution.
Comment: 12 pages, 1 figure. To appear at ICALP'1
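The interval-based idea that such frameworks generalize is the simple solution of labeling each node with the DFS interval spanned by its subtree, so ancestry reduces to interval containment; the cost is two numbers of about log n bits each, which is what the later schemes shave down. A minimal sketch (our own function names):

```python
def interval_labels(children, root):
    """Label each node with [pre, post): a node's interval contains
    exactly the intervals of its descendants."""
    labels, counter = {}, 0
    def dfs(v):
        nonlocal counter
        pre = counter
        counter += 1
        for c in children.get(v, []):
            dfs(c)
        labels[v] = (pre, counter)  # counter is now past the whole subtree
    dfs(root)
    return labels

def is_ancestor(label_u, label_v):
    """u is an ancestor of v (or v itself) iff v's interval nests in u's."""
    return label_u[0] <= label_v[0] and label_v[1] <= label_u[1]

#        0
#       / \
#      1   2
#      |
#      3
lab = interval_labels({0: [1, 2], 1: [3]}, 0)
print(is_ancestor(lab[0], lab[3]), is_ancestor(lab[2], lab[3]))  # True False
```

The appeal of the scheme is that the ancestry query touches only the two labels, with no access to the tree itself.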
Systematic Topology Analysis and Generation Using Degree Correlations
We present a new, systematic approach for analyzing network topologies. We
first introduce the dK-series of probability distributions specifying all
degree correlations within d-sized subgraphs of a given graph G. Increasing
values of d capture progressively more properties of G at the cost of more
complex representation of the probability distribution. Using this series, we
can quantitatively measure the distance between two graphs and construct random
graphs that accurately reproduce virtually all metrics proposed in the
literature. The nature of the dK-series implies that it will also capture any
future metrics that may be proposed. Using our approach, we construct graphs
for d=0,1,2,3 and demonstrate that these graphs reproduce, with increasing
accuracy, important properties of measured and modeled Internet topologies. We
find that the d=2 case is sufficient for most practical purposes, while d=3
essentially reconstructs the Internet AS- and router-level topologies exactly.
We hope that a systematic method to analyze and synthesize topologies offers a
significant improvement to the set of tools available to network topology and
protocol researchers.
Comment: Final version
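The low-order members of the dK-series are straightforward to compute from an edge list; the rough sketch below (function names are ours) reads the 0K statistic as the average degree, 1K as the degree distribution, and 2K as the joint degree distribution over edge endpoints.

```python
from collections import Counter

def dk_stats(edges):
    """0K (average degree), 1K (degree distribution), and
    2K (joint degree distribution of edge endpoints) of a simple graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    avg_degree = sum(deg.values()) / len(deg)                           # 0K
    degree_dist = Counter(deg.values())                                 # 1K
    joint = Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)  # 2K
    return avg_degree, degree_dist, joint

# A triangle with a pendant vertex.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
avg, dist, jdd = dk_stats(edges)
print(avg)          # 2.0
print(dict(dist))   # {2: 2, 3: 1, 1: 1}
print(dict(jdd))    # {(2, 2): 1, (2, 3): 2, (1, 3): 1}
```

Each graph reproducing a given 2K table matches not just the degree sequence (1K) but also how high- and low-degree vertices attach to one another, which is why d=2 already captures many practical topology metrics.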
Distributed Testing of Excluded Subgraphs
We study property testing in the context of distributed computing, under the
classical CONGEST model. It is known that testing whether a graph is
triangle-free can be done in a constant number of rounds, where the constant
depends on how far the input graph is from being triangle-free. We show that,
for every connected 4-node graph H, testing whether a graph is H-free can be
done in a constant number of rounds too. The constant also depends on how far
the input graph is from being H-free, and the dependence is identical to the
one in the case of testing triangles. Hence, in particular, testing whether a
graph is K_4-free, and testing whether a graph is C_4-free can be done in a
constant number of rounds (where K_k denotes the k-node clique, and C_k denotes
the k-node cycle). On the other hand, we show that testing K_k-freeness and
C_k-freeness for k>4 appear to be much harder. Specifically, we investigate two
natural types of generic algorithms for testing H-freeness, called DFS tester
and BFS tester. The latter captures the previously known algorithm to test the
presence of triangles, while the former captures our generic algorithm to test
the presence of a 4-node graph pattern H. We prove that both DFS and BFS
testers fail to test K_k-freeness and C_k-freeness in a constant number of
rounds for k>4.
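The flavor of such distributed subgraph detection is easiest to see for triangles: a node detects a triangle through itself as soon as it knows its neighbors' neighbor lists. The centralized simulation of that one-round check below is only a sketch, and it ignores the CONGEST bandwidth limits that make the actual testers (and their random edge sampling) necessary.

```python
def nodes_seeing_a_triangle(adj):
    """adj: vertex -> set of neighbors. A vertex v sees a triangle iff
    some neighbor u of v is itself adjacent to another neighbor of v."""
    return {v for v, nbrs in adj.items()
            if any(adj[u] & (nbrs - {u}) for u in nbrs)}

# Triangle 0-1-2 with a pendant vertex 3: the triangle's vertices detect it.
triangle_plus_tail = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(nodes_seeing_a_triangle(triangle_plus_tail))   # {0, 1, 2}

# C_4 is triangle-free: no vertex reports anything.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(nodes_seeing_a_triangle(square))               # set()
```

For k-node patterns with k>4, no vertex of the pattern necessarily sees all the others within distance one, which hints at why the DFS and BFS testers stop working there.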
Randomized Local Network Computing
In this paper, we continue investigating the line of research questioning the power of randomization in the design of distributed algorithms. In their seminal paper, Naor and Stockmeyer [STOC 1993] established that, in the context of network computing, in which all nodes execute the same algorithm in parallel, any construction task that can be solved locally by a randomized Monte-Carlo algorithm can also be solved locally by a deterministic algorithm. This result, however, holds in a specific context. In particular, it holds only for distributed tasks whose solutions can be locally checked by a deterministic algorithm. In this paper, we extend the result of Naor and Stockmeyer to a wider class of tasks. Specifically, we prove that the same derandomization result holds for every task whose solutions can be locally checked using a 2-sided error randomized Monte-Carlo algorithm. This extension finds applications to, e.g., the design of lower bounds for construction tasks which tolerate that some nodes compute incorrect values. In a nutshell, we show that randomization does not help for solving such resilient tasks.
Networks become navigable as nodes move and forget
We propose a dynamical process for network evolution, aiming at explaining
the emergence of the small world phenomenon, i.e., the statistical observation
that any pair of individuals are linked by a short chain of acquaintances
computable by a simple decentralized routing algorithm, known as greedy
routing. Previously proposed dynamical processes made it possible to demonstrate
experimentally (by simulations) that the small world phenomenon can emerge from
local dynamics. However, the analysis of greedy routing using the probability
distributions arising from these dynamics is quite complex because of mutual
dependencies. In contrast, our process enables complete formal analysis. It is
based on the combination of two simple processes: a random walk process and a
harmonic forgetting process. Both processes reflect natural behaviors of the
individuals, viewed as nodes in the network of inter-individual acquaintances.
We prove that, in k-dimensional lattices, the combination of these two
processes generates long-range links mutually independently distributed as a
k-harmonic distribution. We analyze the performance of greedy routing at the
stationary regime of our process, and prove that the expected number of steps
for routing from any source to any target in any multidimensional lattice is a
polylogarithmic function of the distance between the two nodes in the lattice.
To the best of our knowledge, these results are the first formal proof that navigability
in small worlds can emerge from a dynamical process for network evolution. Our
dynamical process can find practical applications to the design of spatial
gossip and resource location protocols.
Comment: 21 pages, 1 figure